NeuralGrasps: Learning Implicit Representations for Grasps of Multiple Robotic Hands. (arXiv:2207.02959v1 [cs.RO] CROSS LISTED)
We introduce a neural implicit representation for grasps of objects from
multiple robotic hands. Grasps from different robotic hands are encoded
into a shared latent space. Each latent vector decodes to both the 3D shape
of an object and the 3D shape of a robotic hand in a grasping pose,
represented as the signed distance functions of the two shapes. In addition,
the distance metric of the latent space is learned to preserve the
similarity between grasps across different robotic hands, where grasp
similarity is defined by the contact regions of the robotic hands. This
property enables our method to transfer grasps between different grippers,
including a human hand; such transfer has the potential to share grasping
skills between robots and to let robots learn grasping skills from humans.
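As a concrete illustration of this design, below is a minimal sketch of a
DeepSDF-style decoder over a shared latent space, together with a
metric-preserving loss. The class and function names, network sizes, and the
assumption that a contact-based grasp distance target is precomputed are our
own illustrative choices, not details taken from the paper.

    import torch
    import torch.nn as nn

    class GraspSDFDecoder(nn.Module):
        # Hypothetical decoder: a shared latent grasp code z plus a 3D
        # query point x map to two signed distances, one to the object
        # surface and one to the gripper surface in its grasping pose.
        def __init__(self, latent_dim=256, hidden=512):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(latent_dim + 3, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 2),  # (sdf_object, sdf_hand)
            )

        def forward(self, z, x):
            # z: (N, latent_dim) latent codes, x: (N, 3) query points
            return self.net(torch.cat([z, x], dim=-1))

    def metric_preserving_loss(z_a, z_b, grasp_dist):
        # Pull the latent distance between two grasp codes toward a
        # target grasp distance derived from contact-region similarity
        # (assumed precomputed; smaller means more similar contacts).
        latent_dist = torch.norm(z_a - z_b, dim=-1)
        return ((latent_dist - grasp_dist) ** 2).mean()

In this sketch, grasp_dist would come from comparing the contact regions of
two grasps on the same object, so that grasps with matching contacts, even
from different hands, are pushed close together in the latent space.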
Furthermore, the encoded signed distance functions of objects and grasps in
our implicit representation can be used for 6D object pose estimation from
partial point clouds with grasping contact optimization, which enables
robotic grasping in the real world.
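To make the pose-estimation idea concrete, here is a sketch of fitting a 6D
pose by driving the decoded object SDF to zero on an observed partial point
cloud. The function names, the axis-angle parameterization, and the
surface-only objective are illustrative assumptions on our part; the paper's
actual optimization also involves grasping contacts.

    import torch

    def axis_angle_to_matrix(rotvec):
        # Rodrigues' formula, kept differentiable so the pose can be
        # optimized by gradient descent.
        theta = rotvec.norm() + 1e-8
        k = rotvec / theta
        zero = torch.zeros((), dtype=rotvec.dtype)
        K = torch.stack([
            torch.stack([zero, -k[2], k[1]]),
            torch.stack([k[2], zero, -k[0]]),
            torch.stack([-k[1], k[0], zero]),
        ])
        eye = torch.eye(3, dtype=rotvec.dtype)
        return eye + torch.sin(theta) * K + (1.0 - torch.cos(theta)) * (K @ K)

    def estimate_pose(decoder, z, points_cam, iters=200, lr=1e-2):
        # Observed surface points should lie on the zero level set of the
        # decoded object SDF once mapped into the object frame.
        # z: (1, latent_dim) latent code, points_cam: (N, 3) points.
        # Small random init avoids the zero-rotation singularity of the
        # axis-angle gradient.
        rotvec = (1e-3 * torch.randn(3)).requires_grad_()
        trans = torch.zeros(3, requires_grad=True)
        opt = torch.optim.Adam([rotvec, trans], lr=lr)
        for _ in range(iters):
            opt.zero_grad()
            R = axis_angle_to_matrix(rotvec)
            pts_obj = points_cam @ R.T + trans
            sdf_obj = decoder(z.expand(pts_obj.shape[0], -1), pts_obj)[:, 0]
            loss = sdf_obj.abs().mean()
            loss.backward()
            opt.step()
        return rotvec.detach(), trans.detach()

Because the decoder is differentiable in both the query points and the pose
parameters, the same objective can be extended with contact terms, which is
what the abstract refers to as grasping contact optimization.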